103 research outputs found

    Synchrotron radiation circular dichroism: a new tool for identification of point-mutation protein

    Abstract: Many diseases are associated with mutations of wild-type proteins, and even a single point mutation can lead to severe clinical outcomes. Few techniques can detect the minute differences between wild-type and mutant proteins in solution under near-physiological conditions. Circular dichroism (CD) is an established and valuable technique for examining protein structure. Because it sensitively detects conformational changes, it has important potential for identifying mutant proteins. Synchrotron radiation CD (SRCD) offers significant enhancements over conventional CD spectroscopy, enabling high-resolution conformation detection and use as a tool for point-mutation protein identification. In this report, SRCD was used, as an example, to identify point mutations of human phosphoribosyl pyrophosphate synthetase 1 that are associated with an X chromosome-linked disease.

    HyCLASSS: A Hybrid Classifier for Automatic Sleep Stage Scoring

    Automatic identification of sleep stages is an important step in a sleep study. In this paper, we propose a hybrid automatic sleep stage scoring approach, named HyCLASSS, based on single-channel electroencephalogram (EEG). HyCLASSS, for the first time, leverages both signal features and stage transition features of human sleep for automatic identification of sleep stages. HyCLASSS consists of two parts: a random forest classifier and correction rules. The random forest classifier is trained using 30 EEG signal features, including temporal, frequency, and nonlinear features. The correction rules are constructed from stage transition features, incorporating the continuity property of sleep and the characteristics of sleep stage transitions. Compared with the gold standard of manual scoring using the Rechtschaffen and Kales criteria, the overall accuracy and kappa coefficient on 198 subjects reached 85.95% and 0.8046, respectively, in our experiment. The performance of HyCLASSS compared favorably with previous work, and it could be integrated into sleep evaluation or sleep diagnosis systems in the future.
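The paper does not spell out its correction rules in this abstract; as an illustration only, a minimal sketch of one plausible continuity-based rule (hypothetical, not the actual HyCLASSS rule set) might relabel an isolated epoch whose two neighbors agree:

```python
def apply_continuity_rule(stages):
    """Relabel an isolated sleep-stage epoch whose two neighbors agree
    with each other but not with it -- one plausible way to encode the
    continuity property of sleep (illustrative, not HyCLASSS's rules)."""
    corrected = list(stages)
    for i in range(1, len(corrected) - 1):
        if corrected[i - 1] == corrected[i + 1] != corrected[i]:
            corrected[i] = corrected[i - 1]
    return corrected

# A lone REM epoch between agreeing N2 epochs is treated as a likely misclassification:
print(apply_continuity_rule(["N2", "N2", "REM", "N2", "N2"]))
# → ['N2', 'N2', 'N2', 'N2', 'N2']
```

Such a post-processing pass runs after the classifier, so it can only flip epochs the random forest already labeled, which is why the abstract describes the two parts as complementary.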

    Mining Non-Lattice Subgraphs for Detecting Missing Hierarchical Relations and Concepts in SNOMED CT

    Objective: Quality assurance of large ontological systems such as SNOMED CT is an indispensable part of the terminology management lifecycle. We introduce a hybrid structural-lexical method for scalable and systematic discovery of missing hierarchical relations and concepts in SNOMED CT. Materials and Methods: All non-lattice subgraphs (the structural part) in SNOMED CT are exhaustively extracted using a scalable MapReduce algorithm. Four lexical patterns (the lexical part) are identified among the extracted non-lattice subgraphs. Non-lattice subgraphs exhibiting such lexical patterns are often indicative of missing hierarchical relations or concepts, and each lexical pattern is associated with a specific potential type of error. Results: Applying the structural-lexical method to SNOMED CT (September 2015 US edition), we found 6801 non-lattice subgraphs that matched these lexical patterns, of which 2046 were amenable to visual inspection. We evaluated a random sample of 100 small subgraphs, of which 59 were reviewed in detail by domain experts. All the subgraphs reviewed contained errors confirmed by the experts. The most frequent type of error was missing is-a relations due to incomplete or inconsistent modeling of the concepts. Conclusions: Our hybrid structural-lexical method is innovative and proved effective not only in detecting errors in SNOMED CT but also in suggesting remediation for these errors.
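A non-lattice situation arises when a pair of concepts has more than one maximal common descendant, violating the lattice property expected of a well-formed is-a hierarchy. A rough sketch of the structural check on a toy hierarchy (assumed concept names, not SNOMED CT data or the paper's MapReduce implementation):

```python
from collections import defaultdict

# Toy is-a hierarchy: child -> set of parents (hypothetical example).
# D1 and D2 are both children of A and B; E is below both D1 and D2.
parents = {
    "D1": {"A", "B"},
    "D2": {"A", "B"},
    "E":  {"D1", "D2"},
}

def descendants(concept):
    """All concepts transitively subsumed by `concept`."""
    children = defaultdict(set)
    for child, ps in parents.items():
        for p in ps:
            children[p].add(child)
    seen, stack = set(), [concept]
    while stack:
        node = stack.pop()
        for ch in children[node]:
            if ch not in seen:
                seen.add(ch)
                stack.append(ch)
    return seen

def maximal_common_descendants(a, b):
    """A common descendant is maximal if none of its parents is also common.
    More than one maximal element marks (a, b) as a non-lattice pair."""
    common = descendants(a) & descendants(b)
    return {c for c in common if not (parents.get(c, set()) & common)}

print(maximal_common_descendants("A", "B"))  # → {'D1', 'D2'}: a non-lattice pair
```

Here the pair (A, B) has two maximal common descendants, so the subgraph spanned by {A, B, D1, D2} would be flagged for lexical-pattern matching and expert review.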

    NHash: Randomized N-Gram Hashing for Distributed Generation of Validatable Unique Study Identifiers in Multicenter Research

    BACKGROUND: A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (validatability): inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. OBJECTIVE: This paper introduces a novel method called Randomized N-gram Hashing (NHash) for generating unique study identifiers in a distributed and validatable fashion in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk of identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. METHODS: NHash consists of two phases. First, an intermediate string is generated using randomized N-gram hashing. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has three components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with starting position s, computed as (r mod |m|), where |m| is the length of m. The output of Phase 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk).
In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher. The result of the encryption, concatenated with the random number r, is the final NHash study identifier. RESULTS: We performed experiments using a large synthesized dataset comparing NHash with random strings and demonstrated a negligible probability of collision. We implemented NHash for the Center for SUDEP Research (CSR), a National Institute of Neurological Disorders and Stroke-funded Center Without Walls for Collaborative Research in the Epilepsies. This multicenter collaboration involves 14 institutions across the United States and Europe, bringing together extensive and diverse expertise to understand sudden unexpected death in epilepsy (SUDEP). CONCLUSIONS: The CSR Data Repository has successfully used NHash to link deidentified multimodal clinical data collected in participating CSR institutions, meeting all desired objectives of NHash.
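The two phases above can be sketched as follows. This is a simplified illustration under assumed parameters (wrap-around n-grams, a 36-character shift-cipher alphabet, upper-cased inputs), not the CSR production implementation:

```python
import string

ALPHABET = string.ascii_uppercase + string.digits  # assumed 36-char alphabet

def ngram_hash(r, n, m):
    """Phase 1 building block f(r, n, m): the n-gram of input string m
    starting at position s = r mod |m| (wrapping past the end)."""
    s = r % len(m)
    return (m + m)[s:s + n]  # doubling m makes the wrap-around trivial

def shift_cipher(text, shift):
    """Phase 2: encrypt the intermediate string with a simple shift cipher."""
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % len(ALPHABET)]
                   for c in text)

def nhash(fields, r, n=3, shift=7):
    """Concatenate the N-gram hashes of each input field (Phase 1), then
    shift-cipher the result and append the random number r (Phase 2).
    n and shift are illustrative defaults, not the paper's parameters."""
    intermediate = "".join(ngram_hash(r, n, m.upper()) for m in fields)
    return shift_cipher(intermediate, shift) + str(r)

identifier = nhash(["SMITH", "19800101"], r=12345)
```

Because the final identifier carries r, any receiving site can recompute the cipher text from the source fields and reject identifiers that fail to match, which is the validatability property: a single-character edit almost always yields a string that no valid input reproduces.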

    Web-Based Interactive Mapping from Data Dictionaries to Ontologies, with an Application to Cancer Registry

    BACKGROUND: The Kentucky Cancer Registry (KCR) is a central cancer registry for the state of Kentucky that receives data about incident cancer cases from all healthcare facilities in the state within 6 months of diagnosis. Like all other U.S. and Canadian cancer registries, KCR uses a data dictionary provided by the North American Association of Central Cancer Registries (NAACCR) for standardized data entry. The NAACCR data dictionary is not an ontological system. Mapping between the NAACCR data dictionary and the National Cancer Institute (NCI) Thesaurus (NCIt) will facilitate the enrichment, dissemination, and utilization of cancer registry data. We introduce a web-based system, called the Interactive Mapping Interface (IMI), for creating mappings from data dictionaries to ontologies, in particular from NAACCR to NCIt. METHODS: IMI has been designed as a general approach with three components: (1) an ontology library; (2) a mapping interface; and (3) a recommendation engine. The ontology library provides a list of ontologies as targets for building mappings. The mapping interface consists of six modules: project management, mapping dashboard, access control, logs and comments, hierarchical visualization, and result review and export. The built-in recommendation engine automatically identifies a list of candidate concepts to facilitate the mapping process. RESULTS: We report the architecture design and interface features of IMI. To validate our approach, we implemented an IMI prototype and pilot-tested its features by mapping a sample set of NAACCR data elements to NCIt concepts. Of 301 NAACCR data elements, 47 have been mapped to NCIt concepts. Five branches of the hierarchical tree have been identified from these mapped concepts for visual inspection. CONCLUSIONS: IMI provides an interactive, web-based interface for building mappings from data dictionaries to ontologies.
Although the scope of our pilot testing is limited, our results demonstrate the feasibility of using IMI for semantic enrichment of cancer registry data by mapping NAACCR data elements to NCIt concepts.
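The abstract does not describe how the recommendation engine scores candidates; one simple assumed scheme, shown purely for illustration, is to rank concept labels by token-overlap (Jaccard) similarity with the data-element name:

```python
def recommend_candidates(element_name, ontology_terms, top_k=3):
    """Rank ontology concept labels by token-overlap (Jaccard) similarity
    with a data-element name -- an assumed scoring scheme for illustration,
    not IMI's actual recommendation engine."""
    src = set(element_name.lower().split())

    def score(term):
        tgt = set(term.lower().split())
        return len(src & tgt) / len(src | tgt)

    return sorted(ontology_terms, key=score, reverse=True)[:top_k]

# Toy NCIt-style labels (hypothetical):
terms = ["Primary Tumor Site", "Tumor Grade", "Date of Diagnosis"]
print(recommend_candidates("Primary Site", terms, top_k=1))
# → ['Primary Tumor Site']
```

In a real system such lexical scores would typically be combined with synonym expansion and hierarchical context before the curator reviews the candidates in the mapping dashboard.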

    A mendelian randomization study investigates the causal relationship between immune cell phenotypes and cerebral aneurysm

    Background: Cerebral aneurysms (CAs) are a significant cerebrovascular ailment with a multifaceted etiology influenced by various factors, including heredity and environment. This study aimed to explore the possible link between different types of immune cells and the occurrence of CAs. Methods: We analyzed the connection between 731 immune cell signatures and the risk of CAs using publicly available genetic data. The analysis included four immune features, specifically median brightness levels (MBL), proportionate cell (PC), definite cell (DC), and morphological attributes (MA). Mendelian randomization (MR) analysis was conducted using instrumental variables (IVs) derived from the genetic variation linked to CAs. Results: After multiple-testing adjustment based on the FDR method, the inverse variance weighted (IVW) method revealed that three immune cell phenotypes were linked to the risk of CAs. These included CD45 on HLA DR+ NK (odds ratio (OR), 1.116; 95% confidence interval (CI), 1.001–1.244; p = 0.0489) and CX3CR1 on CD14− CD16− (OR, 0.973; 95% CI, 0.948–0.999; p = 0.0447). A further immune cell phenotype, CD16− CD56 on NK, was found to have a significant association with the risk of CAs in the reverse MR study (OR, 0.950; 95% CI, 0.911–0.990; p = 0.0156). Conclusion: Our investigation has yielded findings that support a substantial genetic link between immune cells and CAs, suggesting possible implications for future clinical interventions.
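The inverse variance weighted (IVW) method combines per-variant causal estimates, weighting each by the inverse of its squared standard error. A minimal fixed-effect sketch of the generic estimator (standard IVW formula, not the study's exact analysis pipeline):

```python
def ivw_estimate(betas, ses):
    """Fixed-effect IVW: weight each instrument's ratio estimate beta_i
    by w_i = 1 / se_i^2; the pooled standard error is sqrt(1 / sum(w_i)).
    `betas` and `ses` are hypothetical per-variant Wald ratios and SEs."""
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return beta, se

# Two instruments with equal precision simply average their effects:
beta, se = ivw_estimate([0.10, 0.20], [0.10, 0.10])
```

The pooled beta is then exponentiated to report an odds ratio, which is how effect sizes such as OR 1.116 in the abstract arise from log-odds-scale IVW estimates.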

    Individualized Clinical Practice Guidelines for Pressure Injury Management: Development of an Integrated Multi-Modal Biomedical Information Resource

    Background: Pressure ulcers (PU) and deep tissue injuries (DTI), collectively known as pressure injuries, are serious complications causing staggering costs and human suffering, with over 200 reported risk factors from many domains. Primary pressure injury prevention seeks to prevent the first incidence, while secondary PU/DTI prevention aims to decrease chronic recurrence. Clinical practice guidelines (CPG) combine evidence-based practice and expert opinion to aid clinicians in achieving best practices for primary and secondary prevention. Correcting all risk factors can be both overwhelming and impractical in clinical practice. There is a need to develop practical clinical tools to prioritize the multiple recommendations of CPG, but there is limited guidance on how to prioritize based on individual cases. Bioinformatics platforms enable data management to support clinical decision support and user-interface development for complex clinical challenges such as pressure injury prevention care planning. Objective: The central hypothesis of the study is that the individual's risk factor profile can provide the basis for adaptive, personalized care planning for PU prevention based on CPG prioritization. The study objective is to develop the Spinal Cord Injury Pressure Ulcer and Deep Tissue Injury (SCIPUD+) Resource to support personalized care planning for primary and secondary PU/DTI prevention. Methods: The study employs a retrospective electronic health record (EHR) chart review of over 75 factors known to be relevant to pressure injury risk in individuals with a spinal cord injury (SCI) and routinely recorded in the EHR. We also perform tissue health assessments of a selected subgroup. A systems approach is being used to develop and validate the SCIPUD+ Resource, incorporating the many risk factor domains associated with PU/DTI primary and secondary prevention, ranging from the individual's environment to local tissue health.
Our multiscale approach will leverage the strength of bioinformatics applied to an established national EHR system. A comprehensive model is being used to relate the primary outcome of interest (PU/DTI development) to over 75 PU/DTI risk factors using a retrospective chart review of 5000 individuals selected from the study cohort of more than 36,000 persons with SCI. A Spinal Cord Injury Pressure Ulcer and Deep Tissue Injury Ontology (SCIPUDO) is being developed to enable robust text mining for data extraction from free-form notes. Results: The results of this study are pending. Conclusions: PU/DTI remains a highly significant source of morbidity for individuals with SCI. Personalized interactive care plans may decrease both initial PU formation and readmission rates for high-risk individuals. The project is using established EHR data to build a comprehensive, structured model of environmental, social, and clinical pressure injury risk factors. The comprehensive SCIPUD+ health care tool will be used to relate the primary outcome of interest (pressure injury development) to covariates including environmental, social, clinical, personal, and tissue health profiles, as well as possible interactions among some of these covariates. The study will result in a validated tool for personalized implementation of CPG recommendations and has great potential to change the standard of care in pressure injury clinical practice by enabling clinicians to provide personalized application of CPG priorities tailored to the needs of each at-risk individual with SCI.

    Caffeine intake antagonizes salt sensitive hypertension through improvement of renal sodium handling

    High salt intake is a major risk factor for hypertension. Although acute caffeine intake produces moderate diuresis and natriuresis, caffeine increases blood pressure (BP) by activating sympathetic activity. However, the long-term effects of caffeine on urinary sodium excretion and blood pressure have rarely been investigated. Here, we investigated whether chronic caffeine administration antagonizes salt-sensitive hypertension by promoting urinary sodium excretion. Dahl salt-sensitive (Dahl-S) rats were fed a high-salt diet with or without 0.1% caffeine in drinking water for 15 days. The BP, heart rate, and locomotor activity of the rats were analyzed, and urinary sodium excretion was determined. Renal epithelial Na+ channel (ENaC) expression and function were measured by in vivo and in vitro experiments. Chronic consumption of caffeine attenuated hypertension induced by high salt without affecting sympathetic nerve activity in Dahl-S rats. Renal α-ENaC expression and ENaC activity decreased after chronic caffeine administration. Caffeine increased phosphorylation of AMPK and decreased α-ENaC expression in cortical collecting duct cells. Inhibiting AMPK abolished the effect of caffeine on α-ENaC. Chronic caffeine intake prevented the development of salt-sensitive hypertension by promoting urinary sodium excretion, which was associated with activation of renal AMPK and inhibition of renal tubular ENaC.

    Intelligent Computing: The Latest Advances, Challenges and Future

    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have long followed separate paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing. Intelligent computing is still in its infancy, and an abundance of innovations in its theories, systems, and applications is expected soon. We present the first comprehensive survey of the literature on intelligent computing, covering its theoretical fundamentals, the technological fusion of intelligence and computing, important applications, challenges, and future perspectives. We believe that this survey is highly timely and will provide a comprehensive reference and valuable insights into intelligent computing for academic and industrial researchers and practitioners.

    The association between fibrinogen levels and severity of coronary artery disease and long-term prognosis following percutaneous coronary intervention in patients with type 2 diabetes mellitus

    Background: Fibrinogen is a potential risk factor for the prognosis of coronary artery disease (CAD) and is associated with the complexity of CAD. There is limited research specifically investigating the predictive role of fibrinogen in determining the severity of CAD among patients with type 2 diabetes mellitus (T2DM), as well as its impact on prognosis following percutaneous coronary intervention (PCI). Methods: The study included 675 T2DM patients who underwent PCI at the Third People's Hospital of Chengdu between April 27, 2018, and February 5, 2021, of whom 540 remained after exclusions. The complexity of CAD was assessed using the SYNTAX score. The primary endpoint of the study was the incidence of major adverse cardiovascular and cerebrovascular events (MACCEs). Results: After adjusting for multiple confounding factors, fibrinogen remained a significant independent risk factor for mid/high SYNTAX scores (SYNTAX score > 22; OR 1.184, 95% CI 1.022–1.373, P = 0.025). Additionally, a dose-response relationship between fibrinogen and the risk of complicated CAD was observed (SYNTAX score > 22; nonlinear P = 0.0043). The area under the receiver operating characteristic curve (AUROC) of fibrinogen for predicting a mid/high SYNTAX score was 0.610 (95% CI 0.567–0.651, P = 0.0002). The high-fibrinogen group (fibrinogen > 3.79 g/L) had a higher incidence of calcified lesions and a trend toward more multivessel disease and chronic total occlusion. A total of 116 patients (21.5%) experienced MACCEs during the median follow-up time of 18.5 months. After adjustment, multivariate Cox regression analysis confirmed that fibrinogen (HR 1.138, 95% CI 1.010–1.284, P = 0.034) remained a significant independent risk factor for MACCEs. The AUROC of fibrinogen for predicting MACCEs was 0.609 (95% CI 0.566–0.650, P = 0.0002). Individuals with high fibrinogen levels (fibrinogen > 4.28 g/L) had a higher incidence of acute myocardial infarction (P < 0.001), MACCEs (P < 0.001), all-cause death (P < 0.001), stroke (P = 0.030), and cardiac death (P = 0.002).
Kaplan-Meier analysis revealed a higher incidence of MACCEs in the high-fibrinogen group (log-rank test: P < 0.001). Conclusions: Elevated fibrinogen levels were associated with increased coronary anatomical complexity (as quantified by the SYNTAX score) and a higher incidence of MACCEs after PCI in patients with T2DM.
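The AUROC values reported above (≈0.61) quantify how well fibrinogen alone ranks patients: the statistic equals the probability that a randomly chosen event case has a higher fibrinogen level than a non-event case, with ties counted as one half. A generic rank-based sketch (standard Mann-Whitney formulation with hypothetical values, not the study's software or data):

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the Mann-Whitney probability that a positive (event)
    case scores above a negative (non-event) one; ties count as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical fibrinogen levels (g/L) for event vs. non-event patients:
area = auroc([4.5, 5.1, 3.9], [3.2, 4.0, 3.6])
```

An AUROC of 0.5 corresponds to no discrimination and 1.0 to perfect ranking, so values around 0.61 indicate that fibrinogen is a modest standalone discriminator, consistent with its role here as one independent risk factor among several.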